The most impressive names on your list (e.g. Good) are also the earliest; in particular, ‘intelligence explosion’ predates computational complexity theory, which puts severe bounds on any foom scenario.
I think there is a trend to this effect (although Solomonoff wrote about intelligence explosion in 1985). I wouldn’t point to computational complexity though, so much as general disappointment in AI progress.
How do you think I am misrepresenting Hutter? I agree that he is less influential than Good, and not one of the best-known names in AI. If you are talking about his views on possible AI outcomes, I was thinking of passages like the one in this Hutter paper:
Let us now consider outward explosion, where an increasing amount of matter is transformed into computers of fixed efficiency (fixed comp per unit time/space/energy). Outsiders will soon get into resource competition with the expanding computer world, and being inferior to the virtual intelligences, probably only have the option to flee. This might work for a while, but soon the expansion rate of the virtual world should become so large, theoretically only bounded by the speed of light, that escape becomes impossible, ending or converting the outsiders’ existence.
So while an inward explosion is interesting, an outward explosion will be a threat to outsiders. In both cases, outsiders will observe a speedup of cognitive processes and possibly an increase of intelligence up to a certain point. In neither case will outsiders be able to witness a true intelligence singularity.
I think there is a trend to this effect. I wouldn’t point to computational complexity though, so much as general disappointment in AI progress.
Well, the self-improvement would seem a lot more interesting if it were the case that P=NP or P=PSPACE, I’d say. As it is, a lot of scary things are really well bounded—e.g. specific, accurate prediction of various nonlinear systems requires exponential knowledge, exponential space, and an exponential number of operations in a given forecast time. And the progress is so disappointing perhaps thanks to P!=NP and the like—the tasks do not have easy general solutions, or even general purpose heuristics.
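To make the nonlinear-prediction point concrete, here is a small Python sketch (purely illustrative; it uses the logistic map rather than any real-world system): because errors grow roughly exponentially with the forecast horizon, each thousand-fold improvement in initial-condition precision buys only about ten more useful steps of prediction.

```python
# Toy illustration: sensitive dependence in the logistic map (r = 4) means a
# tiny error in the initial condition blows up roughly exponentially, so the
# precision of knowledge needed for a fixed-accuracy forecast grows with the
# forecast horizon.
def logistic(x, steps, r=4.0):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

x0 = 0.123456789
for eps in (1e-6, 1e-9, 1e-12):
    # horizon at which a tiny initial error has become an order-1 error
    a, b, horizon = x0, x0 + eps, 0
    while abs(a - b) < 0.1 and horizon < 200:
        a, b = logistic(a, 1), logistic(b, 1)
        horizon += 1
    print(f"initial error {eps:.0e} -> useful forecast horizon ~{horizon} steps")
```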
re: quote
Ahh, that’s much better with regard to vagueness. He isn’t exactly in agreement with SI doctrine, though, and the original passage creates the impression of support for the specific doctrine here.
It goes to show that even optimistic AI researchers consider AI to be risky, which is definitely a good thing for the world, but at the same time it makes rhetoric in the vein of ‘other AI researchers are going to kill everyone, and we are the only hope of humanity’ look rather bad. The researchers who aren’t particularly afraid of AI seem to be working on fairly harmless projects which just aren’t coding for that sort of will to paperclip.
Suppose some group says that any practical nuclear reactor will intrinsically risk a multimegaton nuclear explosion. What could that really mean? One thing, really: that the approach they themselves consider practical will intrinsically risk a multimegaton nuclear explosion. It doesn’t say much about other designs, especially if that group doesn’t have a lot of relevant experience. The same ought to apply to SI’s claims.
‘other AI researchers are going to kill everyone, and we are the only hope of humanity’
Let me explicitly reject such rhetoric then.
The difficulty of safety is uncertain: it could be very easy, something almost anyone would get right with little extra time, or it could be quite difficult and demand a lot of extra work (which might be hard to put in given competitive pressures). The region where safety depends sensitively on the precautions and setup of early AI development (among realistic options) should not be much larger than the “easy for everyone” region, so the probability of building AI with good outcomes should, almost trivially, be distributed widely among the many possible AI-building institutions: software firms, governments, academia, etc. And since a small team is very unlikely to build AGI first, it can have at most only a very small share of the total expected probability of a good outcome.
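As a back-of-the-envelope illustration of that last point, here is a toy Python calculation; every number in it is a made-up placeholder, not an estimate from anyone in this thread:

```python
# Toy sketch: if the chance of building AGI first is spread across many
# institutions, a small team's share of the expected good outcomes is small
# even if it is somewhat more careful.  All numbers are placeholders.
builders = {
    # name: (P(builds AGI first), P(good outcome | builds it first))
    "large software firms": (0.45, 0.5),
    "government projects":  (0.30, 0.5),
    "academia":             (0.20, 0.5),
    "small safety-focused team": (0.05, 0.8),
}

total_good = sum(p_first * p_good for p_first, p_good in builders.values())
for name, (p_first, p_good) in builders.items():
    share = p_first * p_good / total_good
    print(f"{name:28s} share of expected good outcomes: {share:.0%}")
```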
A closed project aiming to build safe AI could have an advantage both by using more demanding safety thresholds and by withholding results that would require additional work to make safe but could be immediately used for harm. This is the reasoning behind classifying some kinds of work with dangerous viruses, nuclear technology, and the like. It could provide some safety boost for such a project in principle, but probably not an overwhelming one.
Secrecy might also be obtained through ordinary corporate and government security, and governments in particular would plausibly be much better at it (the Manhattan Project leaked, but the breaking of Enigma did not). And differing safety thresholds matter most with respect to small risks: most institutions would already be worried about large risks, whereas those more concerned with future generations might place extra weight on small ones. But small risks contribute less to expected value.
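A toy calculation of that last sentence, again with placeholder numbers:

```python
# Toy numbers (placeholders, not estimates): if a stricter safety threshold
# only changes decisions about small risks, it moves expected value by little.
value_if_fine, value_if_catastrophe = 1.0, 0.0

def expected_value(risk_taken):
    return risk_taken * value_if_catastrophe + (1 - risk_taken) * value_if_fine

# Both a typical lab and an unusually cautious project would refuse a 30% risk,
# so thresholds only differ on the small-risk tail, say 2% vs 0.2%.
typical, cautious = 0.02, 0.002
print("EV under typical threshold :", expected_value(typical))
print("EV under cautious threshold:", expected_value(cautious))
print("gain from the stricter threshold:",
      expected_value(cautious) - expected_value(typical))
```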
And I would very strongly reject the idea that “generic project X poses near-certain doom if it succeeds while project Y is almost certain to have good effects if it succeeds”: there’s just no way one could have such confident knowledge.
And the progress is so disappointing perhaps thanks to P!=NP and the like—the tasks do not have easy general solutions, or even general purpose heuristics.
You can still get huge differences in performance from software. Chess search explodes as you go deeper, but software improvements have delivered gains comparable to hardware gains: the early AI people were right that if they had been much smarter they could have designed a chess program to beat the human world champion using the hardware they had.
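A concrete, if toy, illustration of such software gains: alpha-beta pruning reaches the same search depth as plain minimax while visiting far fewer nodes, an algorithmic improvement that looks just like a large hardware speedup. The sketch below compares node counts on random game trees (not real chess):

```python
import random
random.seed(1)

# Toy comparison on random game trees of identical shape (not real chess):
# alpha-beta pruning searches the same depth as plain minimax while visiting
# far fewer nodes.
BRANCHING, DEPTH = 8, 6

def minimax(depth, maximizing, counter):
    counter[0] += 1
    if depth == 0:
        return random.random()
    children = [minimax(depth - 1, not maximizing, counter) for _ in range(BRANCHING)]
    return max(children) if maximizing else min(children)

def alphabeta(depth, alpha, beta, maximizing, counter):
    counter[0] += 1
    if depth == 0:
        return random.random()
    if maximizing:
        value = float("-inf")
        for _ in range(BRANCHING):
            value = max(value, alphabeta(depth - 1, alpha, beta, False, counter))
            alpha = max(alpha, value)
            if alpha >= beta:  # opponent will never allow this line: prune
                break
        return value
    value = float("inf")
    for _ in range(BRANCHING):
        value = min(value, alphabeta(depth - 1, alpha, beta, True, counter))
        beta = min(beta, value)
        if alpha >= beta:
            break
    return value

plain, pruned = [0], [0]
minimax(DEPTH, True, plain)
alphabeta(DEPTH, float("-inf"), float("inf"), True, pruned)
print(f"depth {DEPTH}, branching {BRANCHING}: "
      f"minimax visited {plain[0]} nodes, alpha-beta visited {pruned[0]} nodes")
```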
Part of this is that in chess one is interested in being better than one’s opponent: sure, you can’t search perfectly 50 moves ahead, but you don’t have to play against an infinite-computing-power brute-force search; you have to play against humans and other computer programs. Finance, computer security, many aspects of military affairs, and other adversarial domains are pretty important. If you could predict the weather a few days further out than others, you could make a fortune trading commodities and derivatives.
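Here is a toy Python sketch of the relative-edge point (the market model and all numbers are invented for illustration): two forecasters watch the same noisy series and bet on the sign of their forecasts, and the one with the modestly sharper forecast earns systematically more.

```python
import random
random.seed(0)

# Toy model: tomorrow's price change follows a noisy AR(1); a trader sees a
# noisy forecast of it and bets on the sign.  A modestly sharper forecast
# yields a higher average profit.
def average_profit(forecast_noise, steps=20000, phi=0.6):
    x, total = 0.0, 0.0
    for _ in range(steps):
        change = phi * x + random.gauss(0, 1)            # tomorrow's true change
        forecast = change + random.gauss(0, forecast_noise)
        total += change if forecast > 0 else -change     # bet on the forecast sign
        x = change
    return total / steps

print("sharper forecaster (noise 0.5):", round(average_profit(0.5), 3))
print("blunter forecaster (noise 2.0):", round(average_profit(2.0), 3))
```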
Another element is that humans are far from optimized to use their computation for chess-playing, and the same is likely true of many other activities of modern civilization.
Also, there’s empirical evidence from history and firm R&D investments that human research suffers from the serial speed limits of human minds, i.e. one gets more progress from doubling the time available to work than from doubling the size of the workforce. This is most true in areas like mathematics, cryptography, and computer science, and less true in areas demanding physical infrastructure built using the outputs of many fields and physically rate-limited processes. But if one could rush forward on the former sort of work, there would then be an unprecedented surge of ability to advance the more reluctant physical technologies.
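A toy model of the serial-speed point (the functional form and numbers are assumptions for illustration, not data): if research output scales linearly with calendar time but sublinearly with headcount, doubling the time beats doubling the workforce.

```python
# Toy production function (assumed, not estimated): output grows linearly in
# calendar time but with diminishing returns to parallel headcount.
def research_output(workers, years, parallel_exponent=0.5):
    return years * workers ** parallel_exponent

base = research_output(workers=100, years=10)
print("baseline                :", base)
print("double the workforce    :", research_output(workers=200, years=10))
print("double the calendar time:", research_output(workers=100, years=20))
```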